AI apocalypse team formed to fend off catastrophic nuclear and biochemical doomsday scenarios

FOX News

AI expert Marva Bailer explains how, even though laws are currently in place, the average person has more access than ever to tools for creating deepfakes of celebrities. Artificial intelligence (AI) is advancing rapidly, bringing unprecedented benefits, yet it also poses serious risks, such as chemical, biological, radiological and nuclear (CBRN) threats, that could have catastrophic consequences for the world. How can we ensure that AI is used for good and not evil? How can we prepare for the worst-case scenarios that might arise from AI? These are some of the questions that OpenAI, a leading AI research lab and the company behind ChatGPT, is trying to answer with its new Preparedness team. Its mission is to track, evaluate, forecast and protect against the frontier risks of AI models.


AI Can Create Deepfakes for You Now

#artificialintelligence

The field of artificial intelligence (AI) is advancing at an unprecedented pace, and the latest breakthrough comes in the form of video generation systems. These systems can instantly create videos from a short description, making video creation as easy as typing a quick note. The technology has been developed by several companies, including giants like Google and Microsoft, as well as smaller start-ups like Runway. One of the most impressive aspects of these video-generation systems is their ability to produce highly realistic videos in just a few minutes. For example, if you type "a tranquil river in the forest" into the system, it will quickly generate a short video of a river flowing through a forest, complete with sunlight streaming through the trees and birds chirping in the background.


Weekly Top 10 Automation Articles

#artificialintelligence

This week's top automation write-ups highlight experts' concern over the use of machine learning tools to create deepfakes, which could be exploited for illegal purposes at a larger scale. On one level, the technology threatens us, but on the other hand, it can help provide AI-based ID verification for transactions and other purposes. Moreover, Google just announced that its web services are undergoing a significant transformation and Workspace will be available to all users, while Microsoft will allow Xbox One owners to play next-gen Xbox games through the xCloud service. There is much more to explore.


Latest Model That Might Replace GANs To Create Deepfakes

#artificialintelligence

Recently, a team of researchers from UC Berkeley and Adobe Research proposed a new machine learning model known as the Swapping Autoencoder, which can perform image manipulation. The key idea of this research is to encode a picture into two independent components and then enforce that any swapped combination maps to a realistic image. Deep generative models such as GANs (Generative Adversarial Networks) and Variational Autoencoders (VAEs) have gained much traction among researchers over the years. According to the researchers, deep generative models have become a popular technique for producing realistic images from randomly sampled data. However, such deep generative models face various challenges when used for controllable manipulation of existing images.
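The core operation described above — factor an image into two codes, then recombine codes from different images — can be sketched in a few lines. This is a hypothetical, heavily simplified stand-in, not the actual Swapping Autoencoder: the real model uses learned convolutional encoders/decoders trained with GAN losses, whereas `encode` and `decode` below just split and rejoin a flat list of numbers to show the data flow.

```python
# Minimal sketch of the Swapping Autoencoder idea (illustrative only).
# An encoder factors an image into a "structure" code and a "texture" code;
# swapping texture codes between two images should still decode to a
# realistic image. Here the codes are simply the two halves of a flat list.

def encode(image):
    """Split an image (here: a flat list of numbers) into two codes."""
    mid = len(image) // 2
    structure, texture = image[:mid], image[mid:]
    return structure, texture

def decode(structure, texture):
    """Reassemble an image from its two codes."""
    return structure + texture

def swap_generate(image_a, image_b):
    """Keep A's structure, borrow B's texture -- the core swapping step."""
    structure_a, _ = encode(image_a)
    _, texture_b = encode(image_b)
    return decode(structure_a, texture_b)

a = [1, 2, 3, 4]
b = [5, 6, 7, 8]
print(swap_generate(a, b))  # structure of a, texture of b: [1, 2, 7, 8]
```

In the real model, the "enforce that any swapped combination maps to a realistic image" constraint is what a discriminator network provides during training; the sketch only captures the encode-swap-decode plumbing.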


Researchers detail texture-swapping AI that could be used to create deepfakes

#artificialintelligence

In a preprint paper published on Arxiv.org, the researchers claim the model can modify any image in a variety of ways, including texture swapping, while remaining "substantially" more efficient than previous generative models. The researchers acknowledge that their work could be used to create deepfakes, or synthetic media in which a person in an existing image or video is replaced with someone else's likeness. In a human perceptual study, subjects were fooled 31% of the time by images created using the Swapping Autoencoder. But they also say that proposed detectors can successfully spot images manipulated by the tool at least 73.9% of the time, suggesting the Swapping Autoencoder is no more harmful than other AI-powered image manipulation tools.


Create Deepfakes in 5 Minutes with First Order Model Method

#artificialintelligence

The basis of deepfakes, or image animation in general, is to combine the appearance extracted from a source image with motion patterns derived from a driving video. For these purposes deepfakes use deep learning, which is where their name comes from (deep learning fake). To be more precise, they are created using a combination of autoencoders and GANs. An autoencoder is a simple neural network that utilizes unsupervised learning (or self-supervised learning, to be more accurate). Autoencoders get their name because they automatically encode information, and they are usually used for dimensionality reduction.
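The classic autoencoder face-swap setup described above can be sketched as follows. This is an illustrative toy, not a trained network: a real deepfake pipeline trains one shared encoder and a separate decoder per identity on thousands of face images; here `shared_encode` and `decoder_for` are stand-in functions that only demonstrate the swap trick — encode person B's face, then decode it with person A's decoder.

```python
# Hedged sketch of the autoencoder deepfake trick (stand-in functions, not
# a real model). A shared encoder learns an identity-agnostic code (pose,
# expression); one decoder per person reconstructs that person's face.
# Feeding B's code into A's decoder renders A's face with B's expression.

def shared_encode(face):
    # Real models: convolutional layers mapping pixels to a latent code.
    # Here: keep only the identity-agnostic part of a dict "face".
    return {"expression": face["expression"]}

def decoder_for(identity):
    # Real models: one deconvolutional network trained per identity.
    def decode(code):
        return {"identity": identity, "expression": code["expression"]}
    return decode

decode_a = decoder_for("person_A")
code = shared_encode({"identity": "person_B", "expression": "smiling"})
fake = decode_a(code)
print(fake)  # person A's face wearing person B's expression
```

The "5 minutes" claim in the article's title refers to the First Order Motion Model, which avoids per-identity training entirely by learning motion keypoints from the driving video; the sketch above shows only the older two-decoder scheme the paragraph describes.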


Rekognition still racist, politicians desperate over deepfakes, and a good reason to go to (some) music festivals

#artificialintelligence

Roundup Here's our latest summary of AI news beyond what we've already covered. Over 40 festivals pledge not to use facial recognition: A campaign against facial recognition led by the nonprofit Fight for the Future has led to over 40 music festivals publicly committing not to use the technology. Evan Greer, the group's deputy director, and Tom Morello, guitarist for the rock band Rage Against the Machine, teamed up to pen an op-ed celebrating the efforts to push back on the smart AI cameras. "Over the last month, artists and fans waged a grassroots war to stop Orwellian surveillance technology from invading live music events," they wrote on Buzzfeed News. "Our campaign pushed more than 40 of the world's largest music festivals -- like Coachella, Bonnaroo, and SXSW -- to go on the record and state clearly that they have no plans to use facial recognition technology at their events." Musicians and fans were invited to write to their favorite festival organizers, urging them not to support facial recognition. Now, the list of festivals that have confirmed they won't be using the tech has grown. There are still a few top names that have yet to respond, however, including Burning Man and Outside Lands. You can see the complete list here.

Amazon's facial recognition tool fails on black athletes: Amazon's controversial Rekognition software mistook the faces of 27 black athletes competing in American football, baseball, basketball, and hockey for suspected criminals in a mugshot database. An experiment by the American Civil Liberties Union (ACLU) revealed the dangers of relying on facial recognition technology like Rekognition. "This technology is flawed," said Duron Harmon, a safety for the New England Patriots whose face was falsely identified in the experiment. "If it misidentified me, my teammates, and other professional athletes in an experiment, imagine the real-life impact of false matches."


The fight against deepfakes

#artificialintelligence

Last week at the Black Hat cybersecurity conference in Las Vegas, the Democratic National Committee tried to raise awareness of the dangers of AI-doctored videos by displaying a deepfaked video of DNC Chair Tom Perez. Deepfakes are videos that have been manipulated, using deep learning tools, to superimpose a person's face onto a video of someone else. As the 2020 presidential election draws near, there's increasing concern over the potential threats deepfakes pose to the democratic process. In June, the U.S. House Permanent Select Committee on Intelligence held a hearing to discuss the threats of deepfakes and other AI-manipulated media. But there's doubt over whether tech companies are ready to deal with deepfakes.